

Fig. 1: Performance vs. query budget on Cora

Neural Information Processing Systems

We thank all the reviewers for their constructive feedback. Reviewer #1: (1) Number of labeled nodes to train the policy network. For ANRMAB, at least a moderate amount of labeled data is required. We observe trends similar to the results in Section 4.4 (Paper). We have compared classification performance w.r.t. query budget (Fig. 1).








I Stolenly Swear That I Am Up to (No) Good: Design and Evaluation of Model Stealing Attacks

Oliynyk, Daryna, Mayer, Rudolf, Grosse, Kathrin, Rauber, Andreas

arXiv.org Artificial Intelligence

Model stealing attacks endanger the confidentiality of machine learning models offered as a service. Although these models are kept secret, a malicious party can query a model to label data samples and train their own substitute model, violating intellectual property. While novel attacks in the field are continually being published, their design and evaluations are not standardised, making it challenging to compare prior works and assess progress in the field. This paper is the first to address this gap by providing recommendations for designing and evaluating model stealing attacks. To this end, we study the largest group of attacks that rely on training a substitute model -- those attacking image classification models. We propose the first comprehensive threat model and develop a framework for attack comparison. Further, we analyse attack setups from related works to understand which tasks and models have been studied the most. Based on our findings, we present best practices for attack development before, during, and beyond experiments and derive an extensive list of open research questions regarding the evaluation of model stealing attacks. Our findings and recommendations also transfer to other problem domains, hence establishing the first generic evaluation methodology for model stealing attacks.
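The query-then-retrain loop at the core of these substitute-model attacks can be sketched in a few lines. The toy below is an illustrative assumption, not the paper's method or any specific published attack: the "victim" is a hypothetical one-dimensional threshold classifier exposed only as a black-box labeling API, and the attacker fits a substitute decision boundary purely from the labels it returns, then measures fidelity (agreement with the victim) on held-out points.

```python
import random

# Hypothetical "victim" model: a secret decision rule the attacker can
# only query, never inspect (a 1-D threshold classifier for illustration).
SECRET_THRESHOLD = 0.37

def victim_predict(x):
    """Black-box API: returns a label for a single query point."""
    return 1 if x > SECRET_THRESHOLD else 0

# --- Attack: spend a query budget, then train a substitute on the labels ---
random.seed(0)
queries = [random.uniform(-1.0, 1.0) for _ in range(200)]  # query budget
labels = [victim_predict(x) for x in queries]              # stolen labels

# Substitute model: estimate the boundary as the midpoint between the
# largest 0-labelled and the smallest 1-labelled query point.
zeros = [x for x, y in zip(queries, labels) if y == 0]
ones = [x for x, y in zip(queries, labels) if y == 1]
estimated_threshold = (max(zeros) + min(ones)) / 2

def substitute_predict(x):
    return 1 if x > estimated_threshold else 0

# Evaluate fidelity: fraction of held-out points on which the substitute
# agrees with the victim (a common stealing-attack success metric).
test_points = [random.uniform(-1.0, 1.0) for _ in range(1000)]
agreement = sum(
    victim_predict(x) == substitute_predict(x) for x in test_points
) / len(test_points)
print(f"fidelity: {agreement:.3f}")
```

Even this toy shows why evaluation standardisation matters: the reported fidelity depends directly on choices such as the query budget, the query distribution, and the held-out test distribution, which is exactly the kind of setup variation the paper argues must be documented for attacks to be comparable.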